1. Qual Quant; 55(1): 221-255, 2021. Article in English | MEDLINE | ID: covidwho-824115

ABSTRACT

Sentiment research is dominated by studies that assign texts to positive and negative categories. This classification is often based on a bag-of-words approach that counts the frequencies of sentiment terms from a predefined vocabulary, ignoring the contexts in which these words appear. We test an aspect-based network analysis model that computes sentiment about an entity from the shortest paths between sentiment words and the target word across a corpus. Two ground-truth datasets in which human annotators judged whether tweets were positive or negative enabled testing of the internal and external validity of the automated network-based method, evaluating the extent to which the approach's scores correspond to the annotations. We found that tweets annotated as negative had an automated negativity score nearly twice as strong as their positivity score, while positively annotated tweets scored six times higher on positivity than on negativity. To assess the predictive validity of the approach, we analyzed sentiment associated with coronavirus coverage in television news from January 1 to March 25, 2020. Support was found for the four hypotheses tested, demonstrating the utility of the approach. H1: broadcast news expresses less sentiment about coronavirus, panic, and social distancing than non-broadcast news outlets. H2: there is a negative bias in the news across channels. H3: increases in sentiment are associated with an increased volume of news stories. H4: sentiment is associated with uncertainty in news coverage of coronavirus over time. We also found that as the channel type moved from broadcast network news to 24-h business, general, and foreign news, sentiment about coronavirus, panic, and social distancing increased.
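
The abstract describes the scoring approach only at a high level (shortest paths between sentiment words and a target word across a corpus). The following is a minimal sketch of that general idea, assuming a word co-occurrence graph built with networkx; the toy lexicons, the inverse-distance weighting, and the helper names build_cooccurrence_graph and aspect_sentiment are illustrative assumptions, not the authors' implementation.

# Minimal sketch: shortest-path-based sentiment scoring toward a target word.
# Assumes a simple co-occurrence network; lexicons and weighting are placeholders.
from itertools import combinations
import networkx as nx

POSITIVE = {"good", "hope", "recover"}   # placeholder positive lexicon
NEGATIVE = {"panic", "fear", "crisis"}   # placeholder negative lexicon

def build_cooccurrence_graph(documents):
    """Link words that appear together in the same document."""
    G = nx.Graph()
    for doc in documents:
        tokens = set(doc.lower().split())
        for u, v in combinations(tokens, 2):
            G.add_edge(u, v)
    return G

def aspect_sentiment(G, target, lexicon):
    """Sum inverse shortest-path distances from lexicon words to the target word."""
    score = 0.0
    for word in lexicon:
        if word in G and target in G:
            try:
                d = nx.shortest_path_length(G, source=word, target=target)
                score += 1.0 / (d + 1)   # closer sentiment words contribute more
            except nx.NetworkXNoPath:
                continue
    return score

docs = ["coronavirus panic spreads fear", "hope for recovery from coronavirus"]
G = build_cooccurrence_graph(docs)
print("negativity:", aspect_sentiment(G, "coronavirus", NEGATIVE))
print("positivity:", aspect_sentiment(G, "coronavirus", POSITIVE))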
